39 research outputs found

    A systematic comparison of static and dynamic cues for depth perception

    Purpose: A clinical diagnosis of stereoblindness does not necessarily preclude compelling depth perception. Qualitative observations suggest that this may be due to the dynamic nature of the stimuli. The purpose of this study was to systematically investigate the effectiveness of static and dynamic stereoscopic stimuli. Methods: Stereoscopic stimuli were presented on a passive polarized stereoscopic monitor and were manipulated as follows: static disparity (baseline condition), dynamic disparity (change in z-location), change in stimulus pattern, change in z-location with pattern change, change in x-location (horizontal shift), and a control (nil-disparity signal). All depth-detection thresholds were measured simultaneously using an adaptive four-alternative-forced-choice (4AFC) paradigm with all six conditions randomly interleaved. Results: A total of 127 participants (85 women, 42 men; mean [SD] age, 21 [5] years) with visual acuity better than 0.22 logMAR in both eyes were assessed. In comparison to the static disparity condition, depth-detection thresholds were up to 50% lower for the dynamic disparity conditions, with and without pattern change (P < 0.001). The presence of a changing pattern in isolation (P = 0.71) or a horizontal shift (P = 0.41) did not affect the thresholds. Conclusions: Dynamic disparity information facilitates the extraction of depth in comparison to static disparity signals. This finding may account for the compelling perception of depth reported in individuals with no measurable static stereoacuity. Our findings challenge the traditional definition of stereoblindness and suggest that current diagnostic tests using static stimuli may be suboptimal. We argue that both static and dynamic stimuli should be employed to fully assess the binocular potential of patients when considering management options.
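The interleaved adaptive procedure described above can be sketched as follows. The abstract specifies only an adaptive 4AFC design with six randomly interleaved conditions; the 3-down/1-up rule, multiplicative step size, reversal count, and the simulated observer below are illustrative assumptions, not the study's actual parameters.

```python
import random

class Staircase:
    """3-down/1-up adaptive staircase for one depth-detection condition.

    Assumed rule: three consecutive correct responses lower the disparity
    level (harder), one error raises it (easier); the threshold is
    estimated from the levels at direction reversals.
    """
    def __init__(self, start=100.0, step=1.5, n_reversals=8):
        self.level = start            # disparity level, arbitrary units
        self.step = step              # multiplicative step size
        self.n_reversals = n_reversals
        self.correct_run = 0
        self.last_dir = 0
        self.reversals = []

    @property
    def done(self):
        return len(self.reversals) >= self.n_reversals

    def update(self, correct):
        """Record one trial outcome and adjust the disparity level."""
        if correct:
            self.correct_run += 1
            if self.correct_run == 3:      # three correct: make it harder
                self.correct_run = 0
                self._move(-1)
        else:
            self.correct_run = 0
            self._move(+1)                 # one error: make it easier

    def _move(self, direction):
        if self.last_dir and direction != self.last_dir:
            self.reversals.append(self.level)   # direction change = reversal
        self.last_dir = direction
        self.level = self.level / self.step if direction < 0 else self.level * self.step

def run_interleaved(staircases, respond, seed=0):
    """Randomly interleave the conditions trial by trial, as in the study.
    `respond(condition, level)` returns True for a correct 4AFC response."""
    rng = random.Random(seed)
    while any(not s.done for s in staircases.values()):
        cond = rng.choice([c for c, s in staircases.items() if not s.done])
        staircases[cond].update(respond(cond, staircases[cond].level))
    # threshold estimate: mean of the last six reversal levels
    return {c: sum(s.reversals[-6:]) / 6 for c, s in staircases.items()}
```

With a deterministic simulated observer that is correct whenever the level exceeds its true threshold, each staircase converges to oscillate around that threshold.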

    Physicochemical features partially explain olfactory crossmodal correspondences

    During the olfactory perception process, our olfactory receptors are thought to recognize specific chemical features. These features may contribute towards explaining our crossmodal perception. The physicochemical features of odors can be extracted using an array of gas sensors, also known as an electronic nose. The present study investigates the role that the physicochemical features of olfactory stimuli play in explaining the nature and origin of olfactory crossmodal correspondences, which is a consistently overlooked aspect of prior work. Here, we answer the question of whether the physicochemical features of odors contribute towards explaining olfactory crossmodal correspondences, and by how much. We found a similarity of 49% between the perceptual and the physicochemical spaces of our odors. All of our explored crossmodal correspondences, namely the angularity of shapes, smoothness of textures, perceived pleasantness, pitch, and colors, have significant predictors for various physicochemical features, including aspects of intensity and odor quality. While it is generally recognized that olfactory perception is strongly shaped by context, experience, and learning, our findings show that a link, albeit small (6–23%), exists between olfactory crossmodal correspondences and their underlying physicochemical features.
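A common way to quantify the similarity between two stimulus spaces, as in the 49% figure above, is a second-order comparison: correlate the pairwise-distance structure of the perceptual space with that of the physicochemical space. A minimal sketch, assuming Euclidean distances and Pearson correlation (the paper's exact metric is not stated in the abstract):

```python
import numpy as np

def space_similarity(a, b):
    """Correlate the pairwise-distance structure of two stimulus spaces.

    a, b: (n_odours, n_features) arrays, e.g. perceptual ratings versus
    electronic-nose sensor readings for the same odours. Returns the
    Pearson correlation of the two distance vectors.
    """
    def pdist_upper(x):
        # All pairwise Euclidean distances, upper triangle only
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        iu = np.triu_indices(len(x), k=1)
        return d[iu]

    u, v = pdist_upper(np.asarray(a, float)), pdist_upper(np.asarray(b, float))
    return float(np.corrcoef(u, v)[0, 1])
```

Because only the relative geometry matters, a uniformly rescaled copy of a space is perfectly similar to the original under this measure.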

    Artificial Odour-Vision Synaesthesia via Olfactory Sensory Augmentation

    The phenomenology of synaesthesia provides numerous cognitive benefits, which could be used towards augmenting interactive experiences with more refined multisensorial capabilities, leading to more engaging and enriched experiences, better designs, and more transparent human-machine interfaces. In this study, we report a novel framework for the transformation of odours into the visual domain by applying the concept of synaesthesia to a low-cost, portable augmented-reality/virtual-reality system. The benefits of generating an artificial form of synaesthesia are outlined and implemented using a custom-made electronic nose to gather information about odour sources, which is then sent to a mobile computing engine for characterisation, classification, and visualisation. The odours are visualised in the form of coloured 2D abstract shapes in real time. Our results show that our affordable system has the potential to increase human odour discrimination to a level comparable to that of natural synaesthesia, highlighting the prospects for augmenting human-machine interfaces with an artificial form of this phenomenon.
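The pipeline above (e-nose reading → classification → visual code) can be illustrated with a toy sketch. The classifier, odour labels, and colour/shape codes below are hypothetical placeholders; the paper's actual characterisation and visualisation methods are not specified in the abstract.

```python
import numpy as np

# Hypothetical mapping from odour class to a (colour, shape) visual code.
VISUAL_CODE = {
    "citrus": ("yellow", "spiky"),
    "vanilla": ("cream", "rounded"),
}

class NearestCentroid:
    """Minimal classifier for e-nose sensor vectors: assign each reading
    to the class whose training centroid is nearest (a sketch only)."""
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in sorted(set(y))}
        return self

    def predict(self, x):
        x = np.asarray(x, float)
        return min(self.centroids_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))

def visualise(sensor_reading, model):
    """Classify one sensor vector and return its visual code."""
    return VISUAL_CODE[model.predict(sensor_reading)]
```

In the real system the visual code would drive rendering of the coloured 2D abstract shapes in the AR/VR display.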

    Indiscriminable sounds determine the direction of visual motion

    In cross-modal interactions, top-down controls such as attention and explicit identification of cross-modal inputs have been assumed to play crucial roles in optimization. Here we show the establishment of cross-modal associations without such top-down controls. The onsets of two circles producing apparent motion perception were accompanied by indiscriminable sounds consisting of six identical and one unique sound frequencies. After adaptation to the visual apparent motion with the sounds, the sounds acquired a driving effect on illusory visual apparent motion perception. Moreover, pure tones at each unique frequency of the sounds acquired the same effect after the adaptation, indicating that the difference between the indiscriminable sounds was implicitly coded. We further confirmed that the aftereffect did not transfer between eyes. These results suggest that the brain establishes new neural representations between sound frequency and visual motion, without clear identification of the specific relationship between cross-modal stimuli, in early perceptual processing stages.
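The stimulus construction (six shared frequency components plus one unique component per sound) can be sketched as below. The specific frequencies, duration, and sample rate are placeholders; the abstract does not report the values used in the study.

```python
import numpy as np

def complex_tone(shared_hz, unique_hz, dur=0.1, sr=44100):
    """Synthesize one of the study's complex sounds: six shared sinusoidal
    components plus one unique component, normalized to [-1, 1]."""
    t = np.arange(int(dur * sr)) / sr
    comps = list(shared_hz) + [unique_hz]
    wave = sum(np.sin(2 * np.pi * f * t) for f in comps)
    return wave / len(comps)

# Placeholder frequencies: two sounds share six components and differ
# only in one component, making them hard to discriminate explicitly.
SHARED = [400, 600, 800, 1000, 1200, 1400]
left_cue = complex_tone(SHARED, unique_hz=1700)   # e.g. paired with leftward motion
right_cue = complex_tone(SHARED, unique_hz=2300)  # e.g. paired with rightward motion
```

The two cues are spectrally identical except at the unique component, which is the feature the adaptation results suggest is implicitly coded.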

    Combining S-cone and luminance signals adversely affects discrimination of objects within backgrounds

    The visual system processes objects embedded in complex scenes that vary in both luminance and colour. In such scenes, colour contributes to the segmentation of objects from backgrounds, but does it also affect perceptual organisation of object contours which are already defined by luminance signals, or are these processes unaffected by colour’s presence? We investigated whether luminance and chromatic signals comparably sustain processing of objects embedded in backgrounds, by varying contrast along the luminance dimension and along the two cone-opponent colour directions. In the first experiment thresholds for object/non-object discrimination of Gaborised shapes were obtained in the presence and absence of background clutter. Contrast of the component Gabors was modulated along single colour/luminance dimensions or co-modulated along multiple dimensions simultaneously. Background clutter elevated discrimination thresholds only for combined S-(L + M) and L + M signals. The second experiment replicated and extended this finding by demonstrating that the effect was dependent on the presence of relatively high S-(L + M) contrast. These results indicate that S-(L + M) signals impair spatial vision when combined with luminance. Since S-(L + M) signals are characterised by relatively large receptive fields, this is likely to be due to an increase in the size of the integration field over which contour-defining information is summed.
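The modulation dimensions can be made concrete as directions in (L, M, S) cone-contrast space; co-modulation is then a weighted sum of direction vectors. The axis-aligned parameterisation below is a standard convention assumed here, not the paper's exact normalisation.

```python
import numpy as np

# Mechanism-isolating directions in (L, M, S) cone-contrast space
# (standard convention; the study's exact scaling is an assumption).
DIRECTIONS = {
    "lum":  np.array([1.0, 1.0, 0.0]),   # L + M (luminance)
    "s":    np.array([0.0, 0.0, 1.0]),   # S-(L + M) isolating
    "lm":   np.array([1.0, -1.0, 0.0]),  # L-M cone-opponent
}

def modulation(weights):
    """Cone-contrast vector for a stimulus modulated along one or more
    dimensions simultaneously, e.g. {"lum": 0.1, "s": 0.8} for the
    combined L + M and S-(L + M) condition."""
    return sum(w * DIRECTIONS[k] for k, w in weights.items())
```

A single-dimension condition uses one nonzero weight; the combined condition that elevated thresholds corresponds to nonzero weights on both "lum" and "s".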

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
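The MMN is conventionally quantified as the deviant-minus-standard difference wave averaged over a post-onset window. A sketch on synthetic averaged ERPs; the 150 ms lower edge follows the abstract, while the 250 ms upper edge is an illustrative choice.

```python
import numpy as np

def mmn_amplitude(erp_standard, erp_deviant, times, window=(0.15, 0.25)):
    """Mean of the deviant-minus-standard difference wave in a window.

    erp_*: 1-D arrays of trial-averaged voltage (microvolts) per sample;
    times: matching array of sample times in seconds. A clearly negative
    value indicates an MMN; a value near zero indicates its absence,
    as in the audiovisual condition described above.
    """
    diff = np.asarray(erp_deviant, float) - np.asarray(erp_standard, float)
    mask = (times >= window[0]) & (times <= window[1])
    return float(diff[mask].mean())
```

On synthetic data, a negative deflection around 200 ms in the deviant response yields a negative MMN amplitude, and identical standard and deviant responses yield zero.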

    A neural signature of the unique hues

    Since at least the 17th century there has been the idea that there are four simple and perceptually pure “unique” hues: red, yellow, green, and blue, and that all other hues are perceived as mixtures of these four hues. However, sustained scientific investigation has not yet provided solid evidence for a neural representation that separates the unique hues from other colors. We measured event-related potentials elicited by unique hues and the ‘intermediate’ hues in between them. We find a neural signature of the unique hues 230 ms after stimulus onset at a post-perceptual stage of visual processing. Specifically, the posterior P2 component over the parieto-occipital lobe peaked significantly earlier for the unique than for the intermediate hues (Z = -2.9, p = .004). Having identified a neural marker for unique hues, fundamental questions about the contribution of neural hardwiring, language and environment to the unique hues can now be addressed.
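The latency measure behind the P2 comparison can be sketched as locating the most positive deflection within a component window. The 150-300 ms window below is an illustrative choice bracketing the ~230 ms effect; the paper's actual window and electrode selection are not given in the abstract.

```python
import numpy as np

def p2_peak_latency(erp, times, window=(0.15, 0.30)):
    """Latency (in seconds) of the P2 peak: the most positive sample of a
    trial-averaged parieto-occipital ERP within the search window."""
    idx = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    return float(times[idx[np.argmax(np.asarray(erp, float)[idx])]])
```

Per-participant latencies for unique versus intermediate hues would then feed a paired nonparametric test such as the Wilcoxon signed-rank test reported above.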

    NICE: A computational solution to close the gap from colour perception to colour categorization

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method, we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms.
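The core idea of categorization via isoresponse ellipsoids can be sketched as a nearest-ellipsoid rule in cone-contrast space. The centres and semi-axis lengths below are hypothetical placeholders (and axis-aligned for simplicity), not the fitted NICE parameters.

```python
import numpy as np

class ColourCategory:
    """A colour category modelled as an ellipsoid in cone-contrast space,
    in the spirit of NICE (axis-aligned sketch with placeholder values)."""
    def __init__(self, name, centre, semi_axes):
        self.name = name
        self.centre = np.asarray(centre, float)
        self.semi_axes = np.asarray(semi_axes, float)

    def distance(self, p):
        """Normalized ellipsoidal distance; <= 1 means the point lies
        inside this category's isoresponse ellipsoid."""
        d = (np.asarray(p, float) - self.centre) / self.semi_axes
        return float(np.sqrt((d ** 2).sum()))

def categorize(p, categories):
    """Label a cone-contrast point with the nearest category ellipsoid."""
    return min(categories, key=lambda c: c.distance(p)).name
```

Image labelling then amounts to converting each pixel to cone contrast and applying `categorize` pixelwise.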